Existing natural language understanding (NLU) models often rely on dataset biases rather than intended task-relevant features to achieve high performance on specific datasets. As a result, these models perform poorly on datasets outside the training distribution. Some recent studies address the above issue by reducing the weights of biased samples during the training process. However, these methods still encode biased latent features in representations and neglect the dynamic nature of bias, which hinders model prediction. We propose an NLU debiasing method, named debiasing contrastive learning (DCT), to simultaneously alleviate the above problems based on contrastive learning. We devise a debiasing positive sampling strategy to mitigate biased latent features by selecting the least similar biased positive samples. We also propose a dynamic negative sampling strategy to capture the dynamic influence of biases by employing a bias-only model to dynamically select the most similar biased negative samples. We conduct experiments on three NLU benchmark datasets. Experimental results show that DCT outperforms state-of-the-art baselines on out-of-distribution datasets while maintaining in-distribution performance. We also verify that DCT can reduce biased latent features from the model's representations.
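The two sampling strategies in the abstract above can be sketched concretely. The following is a minimal, hypothetical illustration (not the authors' implementation): given an anchor's representation, candidate representations, their labels, and bias scores from a bias-only model, the debiasing positive is the *least* similar biased sample sharing the anchor's label, and the dynamic negative is the *most* similar biased sample with a different label. The function name and the 0.5 bias threshold are assumptions for illustration.

```python
import numpy as np

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_dct_pair(anchor, candidates, labels, anchor_label,
                    bias_scores, bias_threshold=0.5):
    """Sketch of DCT-style pair selection (hypothetical helper).

    Positive: least-similar biased sample with the anchor's label,
    pushing the encoder away from biased shortcut features.
    Negative: most-similar biased sample with a different label,
    chosen dynamically from bias-only model scores.
    """
    sims = np.array([cosine_sim(anchor, c) for c in candidates])
    biased = bias_scores > bias_threshold
    pos_mask = biased & (labels == anchor_label)
    neg_mask = biased & (labels != anchor_label)
    pos_idx = int(np.argmin(np.where(pos_mask, sims, np.inf)))
    neg_idx = int(np.argmax(np.where(neg_mask, sims, -np.inf)))
    return pos_idx, neg_idx

# Toy example: candidate 1 is the least-similar same-label biased sample,
# candidate 2 the most-similar different-label biased sample.
anchor = np.array([1.0, 0.0])
candidates = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [0.5, 0.5]])
labels = np.array([0, 0, 1, 1])
bias_scores = np.array([0.9, 0.8, 0.9, 0.7])
pos_idx, neg_idx = select_dct_pair(anchor, candidates, labels, 0, bias_scores)
```

The selected pair would then feed a standard contrastive loss such as InfoNCE.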
Extending an existing tourist photo from a partially captured scene to a complete scene is one of the desirable experiences for photography applications. Although image extrapolation has been well studied, extrapolating a photo (i.e., a selfie) from a narrow field of view to a wider one while preserving a similar visual style is more challenging. In this paper, we propose a factorized neural re-rendering model to produce photorealistic novel views from cluttered outdoor Internet photo collections, enabling applications including controllable scene re-rendering, photo extrapolation, and even extrapolated 3D photo generation. Specifically, we first develop a novel factorized re-rendering pipeline to handle the ambiguities in the decomposition of geometry, appearance, and illumination. We also propose a composited training strategy to tackle unexpected occlusions in Internet images. Moreover, to enhance photorealism when extrapolating tourist photos, we propose a novel realism augmentation process to complement appearance details, which automatically propagates texture details from the narrowly captured photo to the extrapolated neural-rendered image. Experiments and photo editing examples on outdoor scenes demonstrate the superior performance of our proposed method in terms of both photorealism and downstream applications.
Free space detection is an important component of autonomous driving technology and plays a significant role in trajectory planning. In the past decade, deep-learning-based free space detection methods have been proven feasible. However, these efforts have focused on urban road environments, and few deep-learning-based methods have been specifically designed for off-road free space detection, due to the lack of off-road benchmarks. In this paper, we present the ORFD dataset, which, to our knowledge, is the first off-road free space detection dataset. The dataset was collected in different scenes (woodland, farmland, grassland, and countryside), different weather conditions (sunny, rainy, foggy, and snowy), and different light conditions (bright light, daylight, twilight, and darkness), and it comprises 12,198 LiDAR point cloud and RGB image pairs with detailed annotations of the traversable area, the non-traversable area, and the unreachable area. We propose a novel network named Off-Net, which unifies a Transformer architecture to aggregate local and global information, meeting the large receptive field requirement of the free space detection task. We also propose cross-attention to dynamically fuse LiDAR and RGB image information for accurate off-road free space detection. The dataset and code are publicly available at https://github.com/chaytonmin/off-net.
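The cross-attention fusion described above can be illustrated with a minimal single-head sketch, assuming features from one modality act as queries over the other (the abstract does not specify head count or projection details, so learned projections are omitted here for brevity):

```python
import numpy as np

def softmax_rows(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_attention(queries, keys_values):
    """Single-head cross-attention sketch: one modality's tokens query
    the other's, so the fused output dynamically mixes in complementary
    cues (e.g., LiDAR geometry attended by RGB appearance features)."""
    d = queries.shape[-1]
    attn = softmax_rows(queries @ keys_values.T / np.sqrt(d))
    return attn @ keys_values, attn

rgb_feats = np.array([[1.0, 0.0], [0.0, 1.0]])                # 2 RGB tokens
lidar_feats = np.array([[1.0, 1.0], [2.0, 0.0], [0.0, 2.0]])  # 3 LiDAR tokens
fused, attn = cross_attention(rgb_feats, lidar_feats)
```

Each fused RGB token is a convex combination of LiDAR tokens, so the attention rows sum to one.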
Mask-based pre-training has achieved great success in self-supervised learning on images, videos, and language, without supervision from manual annotation. However, it has not yet been studied in the field of 3D object detection, where point clouds are information-redundant data. Since the point clouds in 3D object detection are large-scale, it is infeasible to reconstruct the input point cloud. In this paper, we propose a masked voxel classification network for pre-training on large-scale point clouds. Our key idea is to divide the point cloud into voxel representations and classify whether each voxel contains points. This simple strategy makes the network voxel-aware of the object shape, thus improving 3D object detection performance. Extensive experiments show the effectiveness of our pre-trained model with 3D object detectors (SECOND, CenterPoint, and PV-RCNN) on three popular datasets (KITTI, Waymo, and nuScenes). The code is publicly available at https://github.com/chaytonmin/voxel-mae.
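The classification target in this pretext task is simply voxel occupancy. A minimal sketch of voxelization into a binary occupancy grid (the bits at masked voxels would serve as the labels; the masking and network itself are omitted):

```python
import numpy as np

def voxel_occupancy(points, grid_min, voxel_size, grid_shape):
    """Voxelize a point cloud into a binary occupancy grid: True where
    a voxel contains at least one point. These bits are the targets
    for a masked-voxel binary classification pretext task."""
    occ = np.zeros(grid_shape, dtype=bool)
    idx = np.floor((points - grid_min) / voxel_size).astype(int)
    valid = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    for i, j, k in idx[valid]:
        occ[i, j, k] = True
    return occ

# Three points falling into three distinct voxels of a 2x2x2 grid.
points = np.array([[0.1, 0.1, 0.1], [0.9, 0.1, 0.1], [0.1, 0.9, 0.1]])
occ = voxel_occupancy(points, grid_min=0.0, voxel_size=0.5, grid_shape=(2, 2, 2))
```

Classifying occupancy avoids reconstructing the raw points, which is what makes the task tractable at point-cloud scale.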
Portrait photo retouching is a photo retouching task that emphasizes human-region priority and group-level consistency. Lookup-table-based methods achieve promising retouching performance by learning image-adaptive weights to combine 3D lookup tables (3D LUTs) and conducting pixel-to-pixel color transformations. However, this paradigm ignores local context cues and applies the same transformation to portrait pixels and background pixels whenever they exhibit the same raw RGB values. In contrast, an expert usually performs different operations to adjust the color temperature and tone of portrait regions and background regions. This motivates us to model local context cues to explicitly improve retouching quality. Firstly, we consider an image patch and predict pixel-adaptive lookup table weights to precisely retouch the center pixel. Secondly, since neighboring pixels exhibit different affinities to the center pixel, we estimate a local attention mask to modulate the influence of neighboring pixels. Thirdly, the quality of the local attention mask can be further improved by applying supervision based on an affinity map calculated from the ground-truth portrait mask. For group-level consistency, we propose to directly constrain the variance of the mean color components in the Lab space. Extensive experiments on the PPR10K dataset verify the effectiveness of our method, e.g., on high-resolution photos, a gain of over 0.5 on the PSNR metric and a reduction of at least 2.1 on the group-level consistency metric.
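The core "pixel-adaptive LUT weights" idea can be sketched as follows. This is a simplified stand-in, not the paper's method: per-channel 1D curves replace full trilinearly interpolated 3D LUTs, lookups use nearest-bin rounding, and the per-pixel weights (which the paper predicts from local patch context) are given directly:

```python
import numpy as np

def apply_adaptive_luts(image, luts, weights):
    """image: (H, W, 3) in [0, 1]; luts: (N, B, 3) basis per-channel
    curves (simplified stand-in for 3D LUTs); weights: (H, W, N)
    per-pixel blending weights. Each pixel gets its own blend of the
    basis LUTs, unlike image-level weights shared by all pixels."""
    B = luts.shape[1]
    bins = np.clip(np.round(image * (B - 1)).astype(int), 0, B - 1)
    out = np.zeros_like(image)
    for c in range(3):
        vals = luts[:, bins[..., c], c]           # (N, H, W) lookups
        out[..., c] = np.einsum('nhw,hwn->hw', vals, weights)
    return out

identity = np.tile(np.linspace(0.0, 1.0, 3)[:, None], (1, 3))  # (B=3, 3)
luts = np.stack([identity, 1.0 - identity])   # two basis curves
image = np.array([[[0.0, 0.5, 1.0]]])         # a single pixel
weights = np.zeros((1, 1, 2))
weights[..., 0] = 1.0                         # this pixel picks curve 0
out = apply_adaptive_luts(image, luts, weights)
```

With all weight on the identity curve, the pixel is returned unchanged; a background pixel with identical RGB values could receive different weights and hence a different transformation.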
As a key component of online advertising and marketing, click-through rate (CTR) prediction has attracted much attention from both industry and academia. Recently, deep learning has become the mainstream methodology for CTR prediction. Despite sustained efforts, existing methods still face several challenges. On the one hand, high-order interactions between features remain under-explored. On the other hand, high-order interactions may neglect the semantic information of low-order fields. In this paper, we propose a novel prediction method named FINT, which employs a field-aware interaction layer that captures high-order feature interactions while retaining low-order field information. To empirically investigate the effectiveness and robustness of FINT, we conduct extensive experiments on three realistic databases: KDD2012, Criteo, and Avazu. The obtained results show that FINT can significantly improve performance compared with existing methods, without increasing the amount of computation required. Furthermore, the proposed method increased the advertising revenue of a large-scale online video app by approximately 2.72% in an A/B test. To better promote research in the CTR field, we release our code along with a reference implementation at https://github.com/zhishan01/FINT.
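One way to realize "high-order interactions while retaining low-order field information" is a residual element-wise interaction layer. The following is a hypothetical sketch (the exact FINT layer equation is not given in the abstract, so this form is an assumption for illustration): each layer multiplies the current field embeddings by a learned mix of the original embeddings, raising the interaction order by one, while the residual term preserves the low-order signal.

```python
import numpy as np

def field_interaction_layer(x_l, x_0, w):
    """Hypothetical field-aware interaction layer sketch.
    x_l: (F, d) current field embeddings; x_0: (F, d) original field
    embeddings; w: (F, F) learned field-mixing weights. The Hadamard
    product raises interaction order; the residual keeps low order."""
    return x_l * (w @ x_0) + x_l

x0 = np.ones((4, 8))             # 4 fields, embedding dim 8
w1, w2 = np.eye(4), np.eye(4)    # hypothetical learned mixing weights
h = field_interaction_layer(field_interaction_layer(x0, x0, w1), x0, w2)
```

With identity mixing and all-ones embeddings, two stacked layers double the activations twice, showing how each layer compounds the interaction order.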
In recent years, Graph Neural Networks (GNNs), which can naturally integrate node information and topological structure, have been demonstrated to be powerful in learning on graph data. These advantages of GNNs provide great potential to advance social recommendation, since data in social recommender systems can be represented as a user-user social graph and a user-item graph, and learning latent factors of users and items is the key. However, building social recommender systems based on GNNs faces challenges. For example, the user-item graph encodes both interactions and their associated opinions; social relations have heterogeneous strengths; and users are involved in two graphs (e.g., the user-user social graph and the user-item graph). To address these three challenges simultaneously, in this paper, we present a novel graph neural network framework (GraphRec) for social recommendations. In particular, we provide a principled approach to jointly capture interactions and opinions in the user-item graph and propose the framework GraphRec, which coherently models the two graphs and heterogeneous strengths. Extensive experiments on two real-world datasets demonstrate the effectiveness of the proposed framework GraphRec.
translated by 谷歌翻译
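The "jointly capture interactions and opinions" idea can be sketched as attention-weighted aggregation over a user's neighbors, where each neighbor embedding concatenates the item with its opinion (e.g., rating). This is a simplified illustration under assumed shapes, not GraphRec's exact architecture (which uses MLPs for both the interaction representation and the attention scores):

```python
import numpy as np

def aggregate_interactions(user_vec, item_vecs, opinion_vecs, w_att):
    """Sketch of opinion-aware neighbor aggregation. Each interaction
    concatenates item and opinion embeddings; a bilinear attention
    score (a stand-in for GraphRec's attention MLP) weights neighbors
    by importance to this user before averaging."""
    inter = np.concatenate([item_vecs, opinion_vecs], axis=1)  # (N, 2d)
    scores = inter @ w_att @ user_vec                          # (N,)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                                       # attention
    return alpha @ inter, alpha

d = 4
rng = np.random.default_rng(0)
user = rng.standard_normal(d)
items = rng.standard_normal((3, d))       # 3 interacted items
opinions = rng.standard_normal((3, d))    # their opinion embeddings
w_att = rng.standard_normal((2 * d, d))   # hypothetical attention params
h_user, alpha = aggregate_interactions(user, items, opinions, w_att)
```

A second aggregator of the same shape over social neighbors, with its own attention weights, would model the heterogeneous strengths of social relations.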
Managing novelty in perception-based human activity recognition (HAR) is critical in realistic settings to improve task performance over time and ensure solution generalization outside of previously seen samples. Novelty manifests in HAR as unseen samples, activities, objects, environments, and sensor changes, among other ways. Novelty may be task-relevant, such as a new class or new features, or task-irrelevant, resulting in nuisance novelty, such as never-before-seen noise, blur, or distorted video recordings. To perform HAR optimally, algorithmic solutions must be tolerant to nuisance novelty and must learn over time in the face of novelty. This paper 1) formalizes the definition of novelty in HAR, building upon the prior definition of novelty in classification tasks, 2) proposes an incremental open world learning (OWL) protocol and applies it to the Kinetics datasets to generate a new benchmark, KOWL-718, 3) analyzes the performance of current state-of-the-art HAR models when novelty is introduced over time, and 4) provides a containerized and packaged pipeline for reproducing the OWL protocol and for adapting it to any future updates to Kinetics. The experimental analysis includes an ablation study of how the different models perform under various conditions as annotated by Kinetics-AVA. The protocol, as an algorithm for reproducing experiments using the KOWL-718 benchmark, will be publicly released with code and containers at https://github.com/prijatelj/human-activity-recognition-in-an-open-world. The code may be used to analyze different annotations and subsets of the Kinetics datasets in an incremental open world fashion, as well as be extended as further updates to Kinetics are released.
As a neural network compression technique, post-training quantization (PTQ) transforms a pre-trained model into a quantized model using a lower-precision data type. However, prediction accuracy decreases because of the quantization noise, especially in extremely low-bit settings. How to determine the appropriate quantization parameters (e.g., scaling factors and rounding of weights) is now the main problem. Many existing methods determine the quantization parameters by minimizing the distance between features before and after quantization. Using this distance as the metric to optimize the quantization parameters considers only local information. We analyze the problem of minimizing local metrics and show that it does not result in optimal quantization parameters. Furthermore, the quantized model suffers from overfitting due to the small number of calibration samples in PTQ. In this paper, we propose PD-Quant to solve these problems. PD-Quant uses the difference between network predictions before and after quantization to determine the quantization parameters. To mitigate the overfitting problem, PD-Quant adjusts the distribution of activations in PTQ. Experiments show that PD-Quant leads to better quantization parameters and improves the prediction accuracy of quantized models, especially in low-bit settings. For example, PD-Quant pushes the accuracy of ResNet-18 up to 53.08% and RegNetX-600MF up to 40.92% with 2-bit weights and 2-bit activations. The code will be released at https://github.com/hustvl/PD-Quant.
translated by 谷歌翻译
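The contrast between a local feature-distance metric and a prediction-difference metric can be sketched in miniature. This is an illustrative toy, not PD-Quant itself: a single linear layer with softmax output, a uniform symmetric quantizer, and a grid search over scaling factors that minimizes the KL divergence between full-precision and quantized predictions.

```python
import numpy as np

def quantize(w, scale, n_bits=2):
    """Uniform symmetric quantizer: round to the n_bits integer grid."""
    qmax = 2 ** (n_bits - 1) - 1
    qmin = -(2 ** (n_bits - 1))
    return np.clip(np.round(w / scale), qmin, qmax) * scale

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def prediction_difference(x, w, w_q):
    """Global metric sketch: KL divergence between the full-precision
    and quantized predictions, instead of the local feature distance
    ||xW - xW_q|| used by many existing PTQ methods."""
    p, q = softmax(x @ w), softmax(x @ w_q)
    return float(np.sum(p * np.log(p / q)))

def search_scale(x, w, scales, n_bits=2):
    """Pick the scaling factor minimizing the prediction difference."""
    losses = [prediction_difference(x, w, quantize(w, s, n_bits))
              for s in scales]
    return float(scales[int(np.argmin(losses))])

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w = rng.standard_normal((8, 4))
best = search_scale(x, w, scales=np.linspace(0.1, 2.0, 20))
```

A scale that minimizes the local weight distance need not minimize this prediction-level divergence, which is the gap the paper targets.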
Most action recognition datasets and algorithms assume a closed world, where all test samples are instances of the known classes. In open set problems, test samples may be drawn from either known or unknown classes. Existing open set action recognition methods typically extend closed set methods by adding post hoc analysis of classification scores or feature distances, and they do not capture the relations among all the video clip elements. Our approach uses the reconstruction error to determine the novelty of a video, since unknown classes are harder to put back together and thus have a higher reconstruction error than videos from known classes. We refer to our solution to the open set action recognition problem as "Humpty Dumpty", due to its reconstruction abilities. Humpty Dumpty is a novel graph-based autoencoder that accounts for contextual and semantic relations among the clip pieces for improved reconstruction. A larger reconstruction error leads to an increased likelihood that the action cannot be reconstructed, i.e., Humpty Dumpty cannot be put back together again, indicating that the action has never been seen before and is novel/unknown. Extensive experiments are performed on two publicly available action recognition datasets, HMDB-51 and UCF-101, showing state-of-the-art performance for open set action recognition.
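The reconstruction-error novelty test at the heart of the approach can be illustrated with a deliberately tiny stand-in: a linear projection onto a basis fit on known classes plays the role of the autoencoder (the paper's model is a graph-based autoencoder, which this sketch does not attempt to reproduce). Samples lying in the known subspace reconstruct well; samples off it do not.

```python
import numpy as np

def reconstruction_novelty(x, basis, mean, threshold):
    """Project onto a low-dimensional basis fit on known classes and
    reconstruct; a high reconstruction error suggests the sample
    cannot be 'put back together' and is flagged as unknown."""
    z = (x - mean) @ basis.T        # encode
    x_hat = z @ basis + mean        # decode (reconstruct)
    err = float(np.linalg.norm(x - x_hat))
    return err, err > threshold

basis = np.array([[1.0, 0.0]])   # 1-D basis spanning the known classes
mean = np.zeros(2)
err_known, novel_known = reconstruction_novelty(
    np.array([2.0, 0.0]), basis, mean, threshold=1.0)   # in-subspace
err_new, novel_new = reconstruction_novelty(
    np.array([0.0, 2.0]), basis, mean, threshold=1.0)   # off-subspace
```

The known-class sample reconstructs exactly (error 0) while the off-subspace sample incurs error 2 and is flagged as novel; in the paper, the autoencoder's richer clip-level relations serve the same role as this subspace.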